Roundtable Discussion: Debating Legal Personhood for Artificial Intelligence

An Overview of the Foundational Paper and Its Societal Context

The rapid advancement of artificial intelligence (AI), particularly generative models capable of producing human-like text and images, has ignited a complex debate over the legal and moral status of these systems. At the heart of this discussion is the question of "legal personhood"—a status that grants an entity certain rights and responsibilities under the law. Historically, this designation has been extended beyond human beings to entities like corporations, ships, and even natural ecosystems, enabling them to own property, enter contracts, and be held accountable in court. The proposal to extend a similar status to AI has sparked significant contention.

Proponents argue that as AI systems become more autonomous and sophisticated, granting them a form of personhood could be necessary for assigning liability and managing their integration into society. Opponents, however, warn that this move is dangerously premature and risks distracting from the immediate, tangible harms AI systems are already inflicting on people. These harms manifest in everyday life through biased algorithms that influence decisions in hiring, loan applications, criminal justice, and healthcare, often reinforcing and amplifying existing societal inequities. The contention arises from a fundamental clash of priorities: should society focus on the speculative future rights of machines, or on the present-day civil rights of humans affected by those machines?

It is within this context that Dr. Brandeis Marshall’s opinion paper, "No legal personhood for AI," serves as a foundational text. Marshall argues unequivocally that any discussion of granting AI civil rights is premature. The primary focus, she contends, must be on first building a robust social and legal framework that protects the civil rights of all humans impacted by AI. Marshall deconstructs the nature of current AI, defining it not as a thinking, reasoning entity, but as a sophisticated pattern-recognition tool that lacks a moral compass, contextual awareness, or genuine agency. She critiques the tech industry’s "move fast and break things" ethos, asserting that it has led to the reckless deployment of systems that replicate and deepen historical injustices.

Marshall’s argument is rooted in a civil rights analysis, highlighting that legal personhood for humans is not a monolith; many marginalized groups have fought for centuries and continue to fight for the full realization of their rights. To grant personhood to AI before these human struggles are resolved would, in her view, further entrench these inequalities. Instead of contemplating AI rights, she calls for the establishment of an AI responsibility framework with concrete mechanisms for transparency (such as her "AI Dependency Spectrum" to label the extent of AI involvement in a task) and accountability (including punitive measures like "tech probationary jail" and "algorithmic destruction"). Her core message is a call to order: solve for human equity and safety first, before considering the legal status of our tools.

Introduction to the Panelists

The roundtable brought together a diverse group of experts to discuss the themes raised in Dr. Marshall’s paper.

  • Dr. Maya Ellison: A data justice scholar and computer scientist who advocates for a "civil-rights-first" agenda, focusing on algorithmic audits and strengthening anti-discrimination law.
  • Prof. Lucas Romero: A corporate law professor who explores using limited, instrumental legal statuses for AI not to grant rights, but to clarify liability and allocate risk, drawing parallels to corporate personhood.
  • Dr. Anika Banerjee: A philosopher of mind and cognitive science who analyzes the criteria for consciousness and agency, arguing that current AI systems fail to meet the necessary conditions for personhood.
  • Judge (ret.) Helena Cho: A retired appellate judge and administrative law scholar focused on designing pragmatic legal institutions that ensure human accountability for AI systems.
  • Dr. Kwame Mensah: A political economist who examines how AI impacts labor markets and power dynamics, arguing that AI personhood would primarily serve to entrench corporate power.
  • Aiyana Redcloud, JD: An Indigenous legal scholar who offers a comparative perspective on personhood, contrasting the Western, rights-based model with Indigenous relational models that prioritize responsibility.
  • Dr. Petra Novak: A robotics and safety engineer who approaches the problem from a systems-safety perspective, emphasizing technical solutions like traceability, hazard analysis, and enforced fallback modes.
  • Samir Haddad: A disability rights technologist who provides a nuanced view on governance, advocating for co-design and warning that overly broad restrictions on AI could harm disabled people who rely on assistive technologies.

A Detailed Account of the Discussion

The roundtable discussion, organized around Dr. Brandeis Marshall’s paper, revealed broad consensus on the central thesis that granting legal personhood to current AI is unwarranted and premature. The panelists, however, explored the issue from distinct disciplinary angles, producing a rich dialogue on the nature of accountability, the limits of existing legal frameworks, and the urgent need for practical governance mechanisms.

Affirming the "Civil Rights First" Imperative

The discussion opened with strong endorsement of Dr. Marshall’s core argument that the priority must be mitigating AI’s impact on human civil rights. Dr. Maya Ellison framed this as the essential starting point, stating, "The debate over AI personhood is a dangerous distraction from the algorithmic harms already being inflicted on marginalized communities." She argued that the same systems that perpetuate bias in housing and employment are now being considered for legal status, a move she described as "an inversion of justice." Judge (ret.) Helena Cho concurred from a legal-institutional perspective. She noted that, as Marshall’s paper highlights, U.S. law has a long history of individual statutes producing disparate impacts on different groups. Therefore, she argued, "any framework for AI governance must embed equity review at its core, long before we entertain abstract questions about machine rights."

Dr. Kwame Mensah connected this civil rights imperative directly to economic power. He contended that the push for AI personhood is not a philosophical curiosity but an economic strategy. "It’s a move to shield corporations from liability," he stated. "By designating an AI as a legal person, a company can externalize responsibility for wage discrimination, unsafe working conditions, or mass layoffs onto a non-human entity." He proposed that a more constructive path would be to establish a "Worker Algorithmic Rights Act" that would give labor a say in how AI is deployed and ensure human accountability remains with employers.

The panel then delved into the concept of personhood itself. Dr. Anika Banerjee provided a philosophical grounding, cautioning against a fundamental "category mistake." She explained, "We are conflating linguistic competence with genuine understanding. Today's AI is a masterful mimic, a pattern-matching engine, but it lacks the phenomenal consciousness, stable intentions, or unified agency that are the preconditions for moral patiency, let alone legal personhood." While she outlined a potential future research roadmap to test for these qualities, she stressed that such speculation should not inform current policy.

Expanding on the legal construction of personhood, Aiyana Redcloud, JD, challenged the panel to look beyond the Western legal tradition. "The debate is framed around a monolithic, individualistic concept of personhood based on rights," she observed. "In many Indigenous legal systems, personhood is relational and begins with responsibility, not rights. We recognize rivers and ecosystems as persons because we have duties to them, and they to us." From this perspective, she argued, granting personhood to AI—an artifact "built from capital and code with no inherent connection to land or community"—would simply replicate a colonial hierarchy that prioritizes manufactured objects over living, relational entities.

Professor Lucas Romero approached the topic from the perspective of corporate law, which has long experience with non-human legal persons. He agreed that full, rights-bearing personhood for AI was a non-starter. However, he argued that the current legal vacuum creates a "liability gap" where developers, deployers, and users can all deflect responsibility. To solve this, he proposed a narrow, instrumental status of "registered autonomous systems." "This is not about giving AI rights," he clarified. "It's a legal fiction designed to anchor human accountability. It would create a legal entity that can be sued, must be insured, and is subject to regulatory oversight, with clear lines of responsibility leading back to its owners and creators."

Forging Practical Frameworks for Accountability and Governance

Professor Romero’s proposal sparked a vigorous discussion on concrete mechanisms for governance, with many panelists building on the solutions offered in Dr. Marshall’s paper. Judge Cho endorsed the concepts of "algorithmic destruction" and "tech probationary jail," framing them as necessary administrative remedies. "For a public agency using a biased algorithm to determine benefits," she explained, "there must be a due-process-compatible path for a court or regulator to order the system suspended or permanently dismantled." She also saw Marshall’s "AI Dependency Spectrum" as a vital tool for administrative transparency, enabling public oversight of the extent to which automated systems are making critical decisions.

From an engineering standpoint, Dr. Petra Novak translated these legal concepts into a safety playbook. "What the law calls 'probationary jail,' we in systems safety call sandboxing or enforced fallback modes," she said. "What is called 'destruction,' we might implement as an irreversible 'circuit breaker' or kill switch." She advocated for a framework of mandatory third-party audits, end-to-end traceability in AI decision chains, and post-market surveillance, akin to how the FDA regulates medical devices. She stressed that accountability must be designed into the system from the start, not bolted on as an afterthought.
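To make that mapping concrete, the minimal sketch below shows one way an enforced fallback mode and an irreversible circuit breaker could be wrapped around an automated decision function, with every decision traced for later audit. All names here (GovernedModel, suspend, destroy) are hypothetical illustrations of the general pattern Dr. Novak describes, not an implementation drawn from the paper or from any panelist’s systems.

```python
import datetime
from typing import Any, Callable, Optional


class GovernedModel:
    """Wraps an automated decision function with traceability and shutdown controls."""

    def __init__(self, predict: Callable[[Any], Any], fallback: Callable[[Any], Any]):
        self._predict = predict      # the automated decision path
        self._fallback = fallback    # e.g. routing the case to a human reviewer
        self._suspended = False      # reversible "probationary" state
        self._destroyed = False      # irreversible circuit breaker
        self.audit_log = []          # end-to-end trace of every decision

    def decide(self, case: Any) -> Optional[Any]:
        if self._destroyed:
            self._record(case, outcome=None, path="destroyed")
            return None  # the system is permanently out of service
        if self._suspended:
            result = self._fallback(case)
            self._record(case, outcome=result, path="fallback")
            return result
        result = self._predict(case)
        self._record(case, outcome=result, path="automated")
        return result

    def suspend(self) -> None:
        """Enforced fallback mode: every decision is routed away from the model."""
        self._suspended = True

    def destroy(self) -> None:
        """Irreversible kill switch: no further automated decisions are possible."""
        self._destroyed = True

    def _record(self, case: Any, outcome: Any, path: str) -> None:
        self.audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "case": repr(case),
            "path": path,
            "outcome": repr(outcome),
        })
```

In a real deployment, the suspend and destroy controls would presumably be bound to due-process mechanisms such as a regulator’s order, and the audit log would feed the third-party audits and post-market surveillance that Dr. Novak calls for.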

This focus on design and implementation brought Samir Haddad’s perspective to the forefront. While he supported strong accountability measures, he cautioned against blunt instruments. "For many disabled people, AI-powered assistive technologies are a gateway to autonomy and communication," he noted. "A blanket ban or a poorly designed kill switch could inadvertently silence or isolate someone." He argued for a layered accountability model that includes developers, deployers, and operators, with remedies focused not just on punishment but on restoration of service and accommodation. His central plea was for the mandatory inclusion of disabled stakeholders in the design and governance of AI systems, ensuring that safety mechanisms empower rather than disempower users.

The discussion concluded by returning to the central theme of focusing on the present human reality. The panelists unanimously agreed that the path forward lies not in elevating machines but in reinforcing the legal and social structures that protect people. Whether through the lens of data justice, institutional design, engineering safety, or Indigenous law, the conversation affirmed Dr. Marshall's call to action: to first fulfill the promise of comprehensive civil rights for all human beings, creating a society in which technological innovation serves humanity, rather than the other way around.
